14 research outputs found

    Deciding Orthogonality in Construction-A Lattices

    Lattices are discrete mathematical objects with widespread applications to integer programming as well as modern cryptography. A fundamental problem in both domains is the Closest Vector Problem (popularly known as CVP). It is well known that CVP can be solved easily in lattices that have an orthogonal basis \emph{if} the orthogonal basis is specified. This motivates the orthogonality decision problem: verify whether a given lattice has an orthogonal basis. Surprisingly, the orthogonality decision problem is not known to be either NP-complete or in P. In this paper, we focus on the orthogonality decision problem for a well-known family of lattices, namely Construction-A lattices. These are lattices of the form $C + q\mathbb{Z}^n$, where $C$ is an error-correcting $q$-ary code, and are studied in communication settings. We provide a complete characterization of lattices obtained from binary and ternary codes using Construction A that have an orthogonal basis. We use this characterization to give an efficient algorithm to solve the orthogonality decision problem. Our algorithm also finds an orthogonal basis, if one exists, for this family of lattices. We believe that these results could provide a better understanding of the complexity of the orthogonality decision problem for general lattices.
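    The claim that an orthogonal basis makes CVP easy has a short demonstration: in an orthogonal basis the coordinates decouple, so rounding each basis coefficient of the target independently gives the closest lattice point. Below is a minimal Python sketch of that rounding step together with the diagonal-Gram-matrix test for orthogonality. It illustrates the objects involved, not the paper's decision algorithm, and the function names are ours.

```python
import numpy as np

def is_orthogonal_basis(B):
    """The rows of B are pairwise orthogonal iff the Gram matrix
    B @ B.T is diagonal."""
    G = B @ B.T
    return np.allclose(G, np.diag(np.diag(G)))

def cvp_orthogonal(B, t):
    """Closest lattice point to t, assuming the rows of B form an
    orthogonal basis: the coordinates decouple, so rounding each basis
    coefficient <b_i, t> / <b_i, b_i> independently is exact."""
    coeffs = (B @ t) / np.sum(B * B, axis=1)
    return np.rint(coeffs) @ B

# Example: the Construction-A lattice of the trivial binary code {00}
# with q = 2 is 2Z^2, which has the orthogonal basis [[2, 0], [0, 2]].
B = np.array([[2.0, 0.0], [0.0, 2.0]])
assert is_orthogonal_basis(B)
print(cvp_orthogonal(B, np.array([2.7, -0.9])))  # -> [2. -0.], i.e. (2, 0)
```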

    Nearly Optimal Sparse Group Testing

    Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests; or (b) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal -- independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full-text PDF.
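    As a concrete point of reference for scenario (a), the following Python sketch pairs a $\gamma$-sparse random design, in which every item joins exactly $\gamma$ tests, with the standard COMP decoder: clear every item that appears in a negative test and declare the rest defective. This is a generic textbook scheme, not the paper's constructions, and the parameter choices below are arbitrary.

```python
import random

def random_sparse_design(n, num_tests, gamma, seed=0):
    """Non-adaptive design in which each of the n items joins exactly
    gamma tests chosen at random (finite divisibility, scenario (a))."""
    rng = random.Random(seed)
    tests = [set() for _ in range(num_tests)]
    for item in range(n):
        for t in rng.sample(range(num_tests), gamma):
            tests[t].add(item)
    return tests

def comp_decode(tests, outcomes, n):
    """COMP decoding: an item appearing in any negative test cannot be
    defective; every item never cleared is declared defective. This
    never misses a true defective but may report false positives."""
    cleared = set()
    for test, positive in zip(tests, outcomes):
        if not positive:
            cleared |= test
    return sorted(set(range(n)) - cleared)

# Toy run: 12 items, defectives {3, 7}, each item in gamma = 3 of 8 tests.
tests = random_sparse_design(n=12, num_tests=8, gamma=3)
outcomes = [bool(t & {3, 7}) for t in tests]
print(comp_decode(tests, outcomes, n=12))  # a superset of [3, 7]
```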

    Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels

    We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, under the assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate and poly-logarithmic locality. Our constructions compare favorably with existing schemes built under much stronger cryptographic assumptions, and with their classical analogues in the computationally unbounded, Hamming channel. Our constructions crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.
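    For readers unfamiliar with the "relaxed" notion: a relaxed local corrector may answer a query with a special reject symbol $\bot$ on corrupted words rather than always committing to a symbol. The toy Python sketch below illustrates only this interface, using a repetition code; its rate is $1/R$, nothing like the constant-rate, hash-based constructions of the paper.

```python
R = 5  # repetition factor: the locality of the toy corrector

def encode(msg_bits):
    """Toy code: repeat each message bit R times (rate 1/R)."""
    return [b for b in msg_bits for _ in range(R)]

def relaxed_correct(word, i):
    """Relaxed local correction of message bit i: read only the R
    symbols that encode it. On an uncorrupted codeword the answer is
    always correct; on a corrupted word the corrector may output None
    instead, playing the role of the reject symbol 'bot'."""
    block = word[i * R:(i + 1) * R]
    return block[0] if len(set(block)) == 1 else None

w = encode([1, 0, 1])
w[2] = 0                      # corrupt one copy of message bit 0
print(relaxed_correct(w, 0))  # None: rejects rather than guess
print(relaxed_correct(w, 1))  # 0: untouched block decodes correctly
```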

    Lattice-based locality sensitive hashing is optimal

    Locality-sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC '98) to give the first sublinear-time algorithm for the $c$-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage. In a seminal work, Andoni and Indyk (FOCS '06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of $1/c^2$ for the $\ell_2$ norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA '07) and O'Donnell, Wu and Zhou (TOCT '14). Although optimal in the LSH exponent, the ball-partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of $1/c^2$ using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g. to yield low space and query complexity, remains an important open problem.
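    To make the scheme concrete: a lattice LSH family hashes a point by decoding a randomly shifted, rescaled copy of it to the nearest lattice point, and the bucket is that lattice point. The Python sketch below uses $\mathbb{Z}^n$, whose CVP is plain coordinate-wise rounding, purely for simplicity; $\mathbb{Z}^n$ does not achieve the optimal exponent, the paper's point being that suitable lattices do, but the hashing mechanics are the same.

```python
import numpy as np

def make_lattice_lsh(dim, scale, seed=None):
    """Lattice LSH hash: map x to the nearest lattice point of
    x / scale + shift. With the lattice Z^n, 'nearest lattice point'
    is coordinate-wise rounding; the random shift makes the induced
    partition translation-invariant on average."""
    rng = np.random.default_rng(seed)
    shift = rng.random(dim)
    def h(x):
        return tuple(np.rint(np.asarray(x) / scale + shift).astype(int))
    return h

h = make_lattice_lsh(dim=2, scale=1.0, seed=7)
print(h([0.10, 0.20]))  # nearby points land in the same bucket...
print(h([0.15, 0.25]))  # ...much more often than far-apart points do
```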

    Local and Global Computation on Algebraic Data

    Point lattices and error-correcting codes are algebraic structures with numerous applications in communication, storage, and cryptography. In this dissertation we study error-correcting codes and lattices in the following models: classical global models, in which the algorithm can access its entire input, and local models, in which the algorithm has only partial access to its input. We design fast algorithms that can reliably recover data from adversarial noise corruption, and we show fundamental limitations of computation in these models. The challenging problems in coding theory revolve around the following nearest-neighbor search abstraction: given a collection of points with some special properties and a target point, find a special point close to the target. In our case, the collections of special points form lattices or error-correcting codes, and the search problems refer to notions of decoding to a near lattice point or a near codeword. Some well-known examples, which we also study here in local or global variants, include the Closest Vector Problem and Bounded Distance Decoding problems. In the global model, we propose efficient algorithms for solving the Closest Vector Problem in some special Construction-A lattices, a problem known to be hard in general. Further, we make progress on a long-standing open problem regarding decoding Reed-Solomon codes by showing NP-hardness results for an asymptotic noise range. Motivated by applications to linear programming and cryptography, we propose the notion of local testing for membership in point lattices, extending the classical notion of local testing for error-correcting codes to Euclidean space. We design local algorithms for classical families of lattices and show several impossibility results.
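    One of the global results concerns CVP on Construction-A lattices, and a brute-force baseline makes the reduction to the underlying code explicit: the lattice $C + q\mathbb{Z}^n$ is a union of cosets $c + q\mathbb{Z}^n$ over codewords $c$, and within a coset the closest point is found by coordinate-wise rounding. The Python sketch below enumerates codewords, which is exponential in general; the dissertation's algorithms instead exploit the structure of special codes. The example code and parameters are ours, for illustration.

```python
import numpy as np

def cvp_construction_a(codewords, q, t):
    """Closest vector to t in C + q*Z^n by searching over cosets: for
    each codeword c, the nearest point of c + q*Z^n is
    c + q * round((t - c) / q); return the best over all codewords.
    Exponential in the code dimension, so a baseline only."""
    t = np.asarray(t, dtype=float)
    best = None
    for c in codewords:
        v = np.asarray(c, dtype=float)
        v = v + q * np.rint((t - v) / q)
        if best is None or np.sum((t - v) ** 2) < np.sum((t - best) ** 2):
            best = v
    return best

# Lattice built from the binary repetition code {000, 111} with q = 2.
print(cvp_construction_a([(0, 0, 0), (1, 1, 1)], 2, (0.9, 1.2, 1.1)))
# -> [1. 1. 1.]
```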